    Event-based object detection and tracking for space situational awareness

    In this work, we present an optical space imaging dataset captured using a range of event-based neuromorphic vision sensors. The unique method of operation of event-based sensors makes them ideal for space situational awareness (SSA) applications due to the sparseness inherent in space imaging data. These sensors offer significantly lower bandwidth and power requirements, making them particularly well suited for use in remote locations and on space-based platforms. We present the first publicly accessible event-based space imaging dataset, including recordings from sensors by multiple providers, greatly lowering the barrier to entry for other researchers given the scarcity of such sensors and the expertise required to operate them for SSA applications. The dataset contains both daytime and nighttime recordings, including simultaneous co-collections from different event-based sensors. Recorded at a remote site, and containing 572 labeled targets with a wide range of sizes, trajectories, and signal-to-noise ratios, this real-world event-based dataset represents a challenging detection and tracking task that is not readily solved using previously proposed methods. We propose a highly optimized and robust feature-based detection and tracking method, designed specifically for SSA applications, and implemented via a cascade of increasingly selective event filters. These filters rapidly isolate events associated with space objects while maintaining the high temporal resolution of the sensors. The results of this simple yet highly optimized algorithm on the space imaging dataset demonstrate robust high-speed event-based detection and tracking that can readily be implemented on sensor platforms in space as well as in terrestrial environments.
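
    The cascade idea lends itself to a compact sketch. Below is a minimal, illustrative Python example of two such stages, assuming events arrive as (x, y, t) tuples with timestamps in microseconds; the stage names, thresholds, and sensor resolution are hypothetical, not the paper's values.

```python
import numpy as np

# Hypothetical sensor resolution for illustration.
W, H = 640, 480

def refractory_filter(events, refractory_us=1000):
    """Stage 1: drop events that re-fire a pixel too quickly (hot pixels)."""
    last_t = np.full((H, W), -np.inf)
    out = []
    for x, y, t in events:
        if t - last_t[y, x] >= refractory_us:
            out.append((x, y, t))
        last_t[y, x] = t
    return out

def support_filter(events, window_us=5000, radius=1, min_support=2):
    """Stage 2: keep events with enough recent spatio-temporal neighbours;
    isolated sensor-noise events rarely have such support."""
    surface = np.full((H, W), -np.inf)  # per-pixel last-event timestamps
    out = []
    for x, y, t in events:
        patch = surface[max(0, y - radius):y + radius + 1,
                        max(0, x - radius):x + radius + 1]
        if np.sum(t - patch <= window_us) >= min_support:
            out.append((x, y, t))
        surface[y, x] = t
    return out

events = [(10, 10, 0), (10, 11, 100), (300, 40, 150), (11, 10, 200)]
print(support_filter(refractory_filter(events)))  # only the supported event survives
```

    Each stage cheaply discards most of the remaining noise, so later, more selective (and more expensive) stages such as trajectory association only see a sparse stream of candidate events.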

    An optimized multi-layer spiking neural network implementation in FPGA without multipliers

    This paper presents an expansion and evaluation of the hardware architecture for the Optimized Deep Event-driven Spiking Neural Network Architecture (ODESA). ODESA is a state-of-the-art, event-driven, multi-layer Spiking Neural Network (SNN) architecture that offers an end-to-end, online, and local supervised training method. In previous work, ODESA was successfully implemented on Field-Programmable Gate Array (FPGA) hardware, showcasing its effectiveness in resource-constrained hardware environments. Building upon that implementation, this research optimizes the ODESA network hardware by substituting the dot-product multipliers in the neurons with a low-cost shift-register design. This optimization significantly reduces the hardware resources required to implement a neuron, thereby enabling more complex SNNs to be accommodated within a single FPGA. It also reduces power consumption, further enhancing the practicality and efficiency of the hardware implementation. To evaluate the effectiveness of the proposed optimization, extensive experiments and measurements were conducted. The results demonstrate a successful reduction in hardware resource utilization while maintaining the network's functionality and performance, and the reduction in power consumption contributes to the overall energy efficiency of the hardware implementation.
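
    The core trick can be illustrated in software. Below is a minimal sketch, assuming weights are quantized to powers of two so each multiply in the neuron's dot product becomes a bit shift; the function names and quantization scheme are illustrative and do not reproduce the actual ODESA FPGA datapath.

```python
import math

def quantise_to_pow2(w):
    """Map a positive weight to its nearest power-of-two exponent."""
    return round(math.log2(w)) if w > 0 else None

def shift_dot(inputs, exponents):
    """Accumulate shifted inputs instead of multiply-accumulate:
    x * 2**e  ->  x << e (or x >> -e for negative exponents)."""
    acc = 0
    for x, e in zip(inputs, exponents):
        if e is not None and x:
            acc += x << e if e >= 0 else x >> -e
    return acc

weights = [0.5, 2.0, 1.0, 4.0]
exps = [quantise_to_pow2(w) for w in weights]  # [-1, 1, 0, 2]
print(shift_dot([1, 1, 0, 1], exps))           # (1 >> 1) + (1 << 1) + (1 << 2) = 6
```

    On an FPGA, a shift by a constant amount is essentially free wiring, and a variable shift is a small shifter, both far cheaper in area and power than a hardware multiplier, which is the motivation behind the substitution.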

    Noise-robust text-dependent speaker identification using cochlear models

    One challenging issue in speaker identification (SID) is achieving noise-robust performance. Humans can accurately identify speakers even in noisy environments, and we can leverage our knowledge of the function and anatomy of the human auditory pathway to design SID systems that achieve better noise-robust performance than conventional approaches. We propose a text-dependent SID system based on a real-time cochlear model, the cascade of asymmetric resonators with fast-acting compression (CARFAC). We investigate the SID performance of CARFAC on signals corrupted by noise of various types and levels, and compare its performance with conventional auditory feature generators, including mel-frequency cepstral coefficients and frequency-domain linear prediction, as well as another biologically inspired model, the auditory nerve model. We show that CARFAC outperforms the other approaches when signals are corrupted by noise. Our results are consistent across datasets, types and levels of noise, different speaking speeds, and back-end classifiers. We show that the noise-robust SID performance of CARFAC is largely due to its nonlinear processing of auditory input signals; presumably, the human auditory system achieves noise-robust performance via inherent nonlinearities as well.
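
    One concrete, easily reproduced piece of such an evaluation is corrupting clean speech with noise at controlled signal-to-noise ratios before it reaches the front end (CARFAC, MFCC, and so on). The sketch below shows the standard SNR mixing rule; the function name and the stand-in sine "speech" signal are illustrative only.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the mixture has the requested signal-to-noise ratio."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise[:len(speech)]

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in signal
noisy = mix_at_snr(speech, rng.normal(size=16000), snr_db=5)  # 5 dB SNR mixture
```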

    Event-based processing of single photon avalanche diode sensors

    Single Photon Avalanche Diode (SPAD) sensor arrays operating in direct time-of-flight mode can perform 3D imaging using pulsed lasers. Operating at high frame rates, SPAD imagers typically generate large volumes of noisy and largely redundant spatio-temporal data, resulting in communication bottlenecks and unnecessary data processing. In this work, we propose a neuromorphic processing solution to this problem. By processing the spatio-temporal patterns generated by the SPADs in a local, event-based manner, the proposed 128 × 128 pixel sensor-processor system reduces the size of the output data from the sensor by orders of magnitude while increasing the utility of that data in the context of challenging recognition tasks. To test the proposed system, the first large-scale complex SPAD imaging dataset is captured using an existing 32 × 32 pixel sensor. The generated dataset consists of 24000 recordings and involves high-speed view-invariant recognition of airplanes against background clutter. The frame-based SPAD imaging dataset is converted via several alternative methods into event-based data streams and processed using the proposed 125 × 125 receptive field neuromorphic processor as well as a range of feature extractor networks and pooling methods. The output of the proposed event generation methods is then processed by an event-based feature extraction and classification system implemented in FPGA hardware. The event-based processing methods are compared to processing the original frame-based dataset via frame-based but otherwise identical architectures. The results show that the event-based methods are superior to the frame-based approach in terms of both classification accuracy and output data rate.
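
    One plausible way to convert frame-based SPAD output into an event stream is to threshold the change in log intensity between consecutive frames and emit a signed event per crossing pixel, as sketched below. The abstract mentions several alternative conversion methods; this particular scheme and its parameters are illustrative, not necessarily those used in the paper.

```python
import numpy as np

def frames_to_events(frames, frame_dt_us, threshold=0.2):
    """Emit (x, y, t, polarity) events where log intensity changes enough."""
    events = []
    ref = np.log1p(frames[0].astype(np.float64))  # per-pixel reference level
    for i, frame in enumerate(frames[1:], start=1):
        cur = np.log1p(frame.astype(np.float64))
        diff = cur - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        t = i * frame_dt_us
        for x, y in zip(xs, ys):
            events.append((x, y, t, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = cur[y, x]  # reset reference only where events fired
    return events

frames = np.random.poisson(5, size=(10, 32, 32))  # mock 32 x 32 SPAD frames
print(len(frames_to_events(frames, frame_dt_us=1000)))
```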

    Event-based feature extraction using adaptive selection thresholds

    Unsupervised feature extraction algorithms form one of the most important building blocks in machine learning systems. These algorithms are often adapted to the event-based domain to perform online learning in neuromorphic hardware. However, because they were not designed for this purpose, such algorithms typically require significant simplification during implementation to meet hardware constraints, creating trade-offs with performance. Furthermore, conventional feature extraction algorithms are not designed to generate useful intermediary signals, which are valuable precisely in the context of neuromorphic hardware limitations. In this work, a novel event-based feature extraction method is proposed that addresses these issues. The algorithm operates via simple adaptive selection thresholds, which allow a simpler implementation of network homeostasis than previous works by trading off a small amount of information loss in the form of missed events that fall outside the selection thresholds. The behavior of the selection thresholds and the output of the network as a whole are shown to provide uniquely useful signals indicating network weight convergence without the need to access network weights. A novel heuristic method for network size selection is proposed which makes use of noise events and their feature representations. The use of selection thresholds is shown to produce network activation patterns that predict classification accuracy, allowing rapid evaluation and optimization of system parameters without the need to run back-end classifiers. The feature extraction method is tested on both the N-MNIST (Neuromorphic-MNIST) benchmarking dataset and a dataset of airplanes passing through the field of view. Multiple configurations with different classifiers are tested, with the results quantifying the performance gains at each processing stage.
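
    The threshold mechanism described above can be sketched compactly. In the toy implementation below, the time-surface patch around each event is matched against a set of feature neurons, the winner must exceed its own threshold, matched neurons become more selective while adapting their weights, and a missed event relaxes every threshold. All constants, dimensions, and the random input are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, patch_dim = 16, 7 * 7
W = rng.random((n_features, patch_dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm feature vectors
thresholds = np.zeros(n_features)
eta, d_up, d_down = 0.01, 0.01, 0.002           # illustrative learning rates

def process(patch):
    """Return winning feature index for one event context, or None if missed."""
    global W, thresholds
    x = patch / (np.linalg.norm(patch) + 1e-12)
    sims = W @ x
    winner = int(np.argmax(sims))
    if sims[winner] >= thresholds[winner]:
        W[winner] = (1 - eta) * W[winner] + eta * x   # move toward input
        W[winner] /= np.linalg.norm(W[winner])
        thresholds[winner] += d_up                    # become more selective
        return winner
    thresholds -= d_down                              # missed event: relax all
    return None

for _ in range(1000):
    process(rng.random(patch_dim))
```

    A useful side effect, echoed in the abstract, is that the thresholds themselves are an observable convergence signal: once the features fit the input statistics, the mean threshold stabilizes, without any need to inspect the weights.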

    Real-time event-based unsupervised feature consolidation and tracking for space situational awareness

    Earth orbit is a limited natural resource that hosts a vast range of vital space-based systems supporting the international community's national, commercial, and defence interests. This resource is rapidly becoming depleted, with over-crowding in high-demand orbital slots and a growing presence of space debris. We propose the Fast Iterative Extraction of Salient targets for Tracking Asynchronously (FIESTA) algorithm as a robust, real-time, and reactive approach to optical Space Situational Awareness (SSA) using Event-Based Cameras (EBCs) to detect, localize, and track Resident Space Objects (RSOs) accurately and in a timely manner. We address the challenges posed by the asynchronous nature and high temporal resolution of EBC output accurately, without supervision, and with few tunable parameters, using concepts established in the neuromorphic and conventional tracking literature. We show that this algorithm is capable of highly accurate in-frame RSO velocity estimation and average sub-pixel localization in a simulated test environment designed to distinguish the capabilities of the EBC and optical setup from those of the proposed tracking system. This work is a fundamental step toward accurate, end-to-end, real-time optical event-based SSA, and toward developing the foundation for robust closed-form tracking evaluated using standardized tracking metrics.
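
    As a rough illustration of event-driven tracking in this spirit, the toy sketch below routes each incoming event to the nearest active tracker, which maintains a sub-pixel position estimate via an exponential moving average and a finite-difference velocity estimate. This is an illustrative caricature with hypothetical parameters, not the published FIESTA pipeline.

```python
import numpy as np

class EventTracker:
    def __init__(self, x, y, t, alpha=0.05):
        self.pos = np.array([x, y], dtype=float)  # sub-pixel position
        self.vel = np.zeros(2)
        self.t = t
        self.alpha = alpha

    def update(self, x, y, t):
        new_pos = (1 - self.alpha) * self.pos + self.alpha * np.array([x, y])
        dt = max(t - self.t, 1e-9)
        self.vel = (new_pos - self.pos) / dt      # in-frame velocity estimate
        self.pos, self.t = new_pos, t

def assign(trackers, x, y, t, gate=5.0):
    """Route an event to the nearest tracker within `gate` pixels, else seed
    a new tracker (noise handling omitted for brevity)."""
    if trackers:
        d = [np.hypot(*(tr.pos - (x, y))) for tr in trackers]
        i = int(np.argmin(d))
        if d[i] <= gate:
            trackers[i].update(x, y, t)
            return trackers[i]
    trackers.append(EventTracker(x, y, t))
    return trackers[-1]

trackers = []
for e in [(10.0, 10.0, 0.0), (10.5, 10.2, 1.0), (11.0, 10.4, 2.0)]:
    assign(trackers, *e)
print(trackers[0].pos, trackers[0].vel)
```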

    Neuromorphic engineering needs closed-loop benchmarks

    Neuromorphic engineering aims to build (autonomous) systems by mimicking biological systems. It is motivated by the observation that biological organisms, from algae to primates, excel at sensing their environment and reacting promptly to their perils and opportunities. Furthermore, they do so more resiliently than our most advanced machines, at a fraction of the power consumption. It follows that the performance of neuromorphic systems should be evaluated in terms of real-time operation, power consumption, and resiliency to real-world perturbations and noise, using task-relevant evaluation metrics. Yet, following in the footsteps of conventional machine learning, most neuromorphic benchmarks rely on recorded datasets that foster sensing accuracy as the primary measure of performance. Sensing accuracy is but an arbitrary proxy for the system's actual goal: making a good decision in a timely manner. Moreover, static datasets hinder our ability to study and compare the closed-loop sensing and control strategies that are central to survival for biological organisms. This article makes the case for a renewed focus on closed-loop benchmarks involving real-world tasks. Such benchmarks will be crucial in developing and progressing neuromorphic intelligence, and the shift towards dynamic real-world benchmarking tasks should usher in richer, more resilient, and more robust artificially intelligent systems in the future.

    Rotationally invariant vision recognition with neuromorphic transformation and learning networks

    In this paper, we present a biologically inspired, rotationally invariant, end-to-end recognition system demonstrated in hardware with a bitmap camera and a Field-Programmable Gate Array (FPGA). The system integrates the Ripple Pond Network (RPN), a neural network that transforms two-dimensional images into one-dimensional, rotationally invariant temporal patterns (TPs), and the Synaptic Kernel Adaptation Network (SKAN), a neural network capable of unsupervised learning of a spatio-temporal pattern of input spikes. Our results demonstrate rapid learning and recognition of simple hand gestures with no prior training and minimal usage of FPGA hardware.
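
    To make the 2D-to-1D invariance idea concrete, the sketch below uses a radial histogram about the image centroid as a stand-in transform: rotating the input leaves the 1D signature unchanged. This illustrates only the invariance property, not the RPN's actual wavefront mechanism.

```python
import numpy as np

def radial_signature(img, n_bins=16):
    """Integrate image mass over concentric rings about its centroid,
    producing a 1D signature that is invariant to rotation."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    total = img.sum() + 1e-12
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total  # centroid
    r = np.hypot(ys - cy, xs - cx)
    bins = np.minimum((r / (r.max() + 1e-12) * n_bins).astype(int), n_bins - 1)
    return np.bincount(bins.ravel(), weights=img.ravel(), minlength=n_bins)

img = np.zeros((64, 64)); img[20:44, 30:34] = 1.0  # a vertical bar
rot = np.rot90(img)                                 # the same bar rotated 90 degrees
print(np.allclose(radial_signature(img), radial_signature(rot)))  # True
```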

    The Synaptic Kernel Adaptation Network

    In this paper, we present the Synaptic Kernel Adaptation Network (SKAN) circuit, a dynamic circuit that implements Spike Timing Dependent Plasticity (STDP) not by adjusting synaptic weights but via dynamic synaptic kernels. SKAN performs unsupervised learning of the most common spatio-temporal pattern of input spikes using simple analog or digital circuits. It features tunable robustness to temporal jitter and will unlearn a pattern that has not been present for a period of time, using tunable 'forgetting' parameters. It is compact and scalable for use as a building block in a larger network, forming a multilayer hierarchical unsupervised memory system that develops models based on the temporal statistics of its environment. Here we show results from simulations and present digital and analog implementations. Our results show that SKAN is fast, accurate, and robust to noise and jitter.
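
    A heavily simplified caricature of the kernel-adaptation idea is sketched below: each synapse's kernel is reduced to a pure delay that drifts until all kernel peaks coincide on the repeated input pattern. The analog ramp dynamics, threshold adaptation, and forgetting of the real circuit are deliberately omitted, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
pattern = np.array([2.0, 7.0, 4.0])     # spike times of the repeated pattern
delays = rng.uniform(0, 10, size=3)     # per-synapse kernel durations (as delays)
eta = 0.2                               # adaptation rate

for _ in range(200):
    peaks = pattern + delays            # when each synapse's kernel peaks
    fire_t = peaks.mean()               # crude stand-in for threshold crossing
    delays += eta * (fire_t - peaks)    # pull every peak toward the firing time

print(np.round(pattern + delays, 3))    # peaks have converged to coincidence
```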

    Approaches for astrometry using event-based sensors

    Event-based sensors are novel optical imaging devices that offer a different paradigm with which to image space and resident space objects. Also known as silicon retinas, these custom silicon devices use independent, asynchronous pixels that produce data in the form of events generated in response to changes in log-illumination, rather than the conventional frames produced by CCD-based imaging sensors. This removes the need for fixed exposure times and frame rates but requires new approaches to processing and interpreting the spatio-temporal data these sensors produce. The independent nature of each pixel also yields a very high dynamic range, and the asynchronous operation provides high temporal resolution. These characteristics make event-based cameras well suited to terrestrial and orbital space situational awareness tasks. Our previous work with these sensors highlighted their applicability to detecting and tracking resident space objects from LEO to GEO orbital regimes, both at night and during the day, without modification to the camera or optics. Building upon this previous work in applying these artificial vision systems to space situational awareness tasks, we present a study of approaches for calculating astrometry from the event-based data generated by these devices. The continuous nature of these devices, and their ability to image whilst moving, allow new and computationally efficient approaches to astrometry, applicable both to high-speed tracking from terrestrial sensors and to low-power imaging from orbital platforms. Using data collected during multiple sets of telescope trials involving co-collections between a conventional sensor and multiple event-based sensors, we present a system capable of identifying stars and extracting positional information whilst simultaneously tracking an object. Two new prototype event-based sensors, offering increased spatial resolution and higher sensitivity, were also used and characterized in the trials, and updated observation results from these improved sensors are presented. These results further demonstrate and validate the applicability and opportunities offered by event-based sensors for space situational awareness and orbital applications.
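
    The front end of such a pipeline can be sketched simply: accumulate events over a short window into an image, then extract sub-pixel star centroids suitable for plate solving. The sketch below assumes events as (x, y, t) tuples; catalogue matching and the actual astrometric solver are beyond this illustration.

```python
import numpy as np

def accumulate(events, shape, t0, t1):
    """Accumulate events in [t0, t1) into an event-count image."""
    img = np.zeros(shape)
    for x, y, t in events:
        if t0 <= t < t1:
            img[y, x] += 1
    return img

def centroids(img, thresh=3):
    """Return intensity-weighted centroids of bright 3x3 neighbourhoods."""
    out = []
    ys, xs = np.nonzero(img >= thresh)
    for y, x in zip(ys, xs):
        patch = img[y - 1:y + 2, x - 1:x + 2]
        if patch.shape == (3, 3) and img[y, x] == patch.max():
            dy, dx = np.mgrid[-1:2, -1:2]
            w = patch.sum()
            out.append((x + (dx * patch).sum() / w, y + (dy * patch).sum() / w))
    return out  # (x, y) sub-pixel star positions for matching to a catalogue

img = np.zeros((64, 64)); img[10, 10] = 2; img[10, 11] = 4; img[10, 12] = 2
print(centroids(img, thresh=4))  # [(11.0, 10.0)]
```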